Cutting out the middleman: measuring nuclear area in histopathology slides without segmentation
The size of nuclei in histological preparations from excised breast tumors is
predictive of patient outcome (large nuclei indicate poor outcome).
Pathologists take into account nuclear size when performing breast cancer
grading. In addition, the mean nuclear area (MNA) has been shown to have
independent prognostic value. The straightforward approach to measuring nuclear
size is by performing nuclei segmentation. We hypothesize that given an image
of a tumor region with known nuclei locations, the area of the individual
nuclei and region statistics such as the MNA can be reliably computed directly
from the image data by employing a machine learning model, without the
intermediate step of nuclei segmentation. Towards this goal, we train a deep
convolutional neural network model that is applied locally at each nucleus
location, and can reliably measure the area of the individual nuclei and the
MNA. Furthermore, we show how such an approach can be extended to perform
combined nuclei detection and measurement, which is reminiscent of
granulometry. Comment: Conditionally accepted for MICCAI 201
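As a rough illustration of the segmentation-free idea described above, the sketch below extracts fixed-size patches at known nucleus locations and aggregates per-patch area predictions into the MNA. The patch size and the toy intensity-based regressor (a stand-in for the trained CNN, whose architecture the abstract does not specify) are assumptions.

```python
import numpy as np

def extract_patches(image, centers, size=48):
    """Crop a fixed-size patch around each annotated nucleus centre.
    Border patches are implicitly zero-padded."""
    half = size // 2
    padded = np.pad(image, half, mode="constant")
    return np.stack([padded[r:r + size, c:c + size] for r, c in centers])

def mean_nuclear_area(patches, area_regressor):
    """Apply a per-patch area regressor (e.g. a trained CNN) and
    aggregate the predictions into the mean nuclear area (MNA)."""
    areas = np.array([area_regressor(p) for p in patches])
    return areas.mean()

# Toy stand-in for a trained CNN: predicts area from mean patch intensity.
toy_regressor = lambda p: 10.0 + 100.0 * p.mean()

image = np.random.default_rng(0).random((128, 128))
centers = [(30, 40), (64, 64), (100, 20)]
patches = extract_patches(image, centers, size=48)
mna = mean_nuclear_area(patches, toy_regressor)
```

The point of the design is that no pixel-accurate nucleus boundary is ever produced: the regressor maps local appearance directly to an area estimate.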
Automatic nuclei segmentation in H&E stained breast cancer histopathology images
The introduction of fast digital slide scanners that provide whole slide images has led to a revival of interest in image analysis applications in pathology. Segmentation of cells and nuclei is an important first step towards automatic analysis of digitized microscopy images. We therefore developed an automated nuclei segmentation method that works with hematoxylin and eosin (H&E) stained breast cancer histopathology images, which represent regions of whole digital slides. The procedure can be divided into four main steps: 1) pre-processing with color unmixing and morphological operators, 2) marker-controlled watershed segmentation at multiple scales and with different markers, 3) post-processing for rejection of false regions and 4) merging of the results from multiple scales. The procedure was developed on a set of 21 breast cancer cases (subset A) and tested on a separate validation set of 18 cases (subset B). The evaluation was done in terms of both detection accuracy (sensitivity and positive predictive value) and segmentation accuracy (Dice coefficient). The mean estimated sensitivity for subset A was 0.875 (±0.092) and for subset B 0.853 (±0.077). The mean estimated positive predictive value was 0.904 (±0.075) and 0.886 (±0.069) for subsets A and B, respectively. For both subsets, the distribution of the Dice coefficients had a high peak around 0.9, with the vast majority of segmentations having values larger than 0.8. © 2013 Veta et al
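Step 2 of the pipeline above, marker-controlled watershed, can be sketched as follows. This is a minimal single-scale illustration using SciPy's `watershed_ift`, not the authors' multi-scale implementation; the marker choice (distance-transform maxima) and the filter size are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi

def marker_watershed(mask):
    """Marker-controlled watershed on a binary nuclei mask: markers are
    local maxima of the distance transform, one per presumed nucleus."""
    distance = ndi.distance_transform_edt(mask)
    maxima = (distance == ndi.maximum_filter(distance, size=7)) & mask
    markers, _ = ndi.label(maxima)
    # watershed_ift floods a uint8/uint16 "elevation" image from markers;
    # inverting the distance map makes nucleus centres the basins.
    elevation = (distance.max() - distance).astype(np.uint16)
    labels = ndi.watershed_ift(elevation, markers.astype(np.int32))
    labels[~mask] = 0  # keep the background out of the segmentation
    return labels

# Synthetic mask: two touching discs standing in for clustered nuclei.
yy, xx = np.mgrid[:64, :64]
mask = (((yy - 32) ** 2 + (xx - 22) ** 2 < 100)
        | ((yy - 32) ** 2 + (xx - 42) ** 2 < 100))
labels = marker_watershed(mask)
```

Running the same procedure with markers derived at several scales, then merging the results, corresponds to steps 2 and 4 of the described method.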
Comparing computer-generated and pathologist-generated tumour segmentations for immunohistochemical scoring of breast tissue microarrays
BACKGROUND: Tissue microarrays (TMAs) have become a valuable resource for biomarker expression in translational research. Immunohistochemical (IHC) assessment of TMAs is the principal method for analysing large numbers of patient samples, but manual IHC assessment of TMAs remains a challenging and laborious task. With advances in image analysis, computer-generated analyses of TMAs have the potential to lessen the burden of expert pathologist review. METHODS: In current commercial software computerised oestrogen receptor (ER) scoring relies on tumour localisation in the form of hand-drawn annotations. In this study, tumour localisation for ER scoring was evaluated comparing computer-generated segmentation masks with those of two specialist breast pathologists. Automatically and manually obtained segmentation masks were used to obtain IHC scores for thirty-two ER-stained invasive breast cancer TMA samples using FDA-approved IHC scoring software. RESULTS: Although pixel-level comparisons showed lower agreement between automated and manual segmentation masks (κ=0.81) than between pathologists' masks (κ=0.91), this had little impact on computed IHC scores (Allred: κ=0.91, Quickscore: κ=0.92). CONCLUSIONS: The proposed automated system provides consistent measurements thus ensuring standardisation, and shows promise for increasing IHC analysis of nuclear staining in TMAs from large clinical trials.
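Pixel-level agreement of the kind reported above is commonly measured with Cohen's kappa; a minimal sketch for binary segmentation masks follows (the paper's exact computation is not specified here, so this is illustrative only).

```python
import numpy as np

def cohens_kappa(mask_a, mask_b):
    """Pixel-level Cohen's kappa between two binary segmentation masks:
    observed agreement corrected for agreement expected by chance."""
    a = np.asarray(mask_a, bool).ravel()
    b = np.asarray(mask_b, bool).ravel()
    po = (a == b).mean()                                         # observed
    pe = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())   # chance
    return (po - pe) / (1 - pe)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 0]])
k_same = cohens_kappa(a, a)   # identical masks -> kappa of 1
k_diff = cohens_kappa(a, b)   # partial agreement -> kappa between 0 and 1
```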
Roto-Translation Covariant Convolutional Networks for Medical Image Analysis
We propose a framework for rotation and translation covariant deep learning
using group convolutions. The group product of the special Euclidean
motion group SE(2) describes how a concatenation of two roto-translations
results in a net roto-translation. We encode this geometric structure into
convolutional neural networks (CNNs) via group convolutional layers,
which fit into the standard 2D CNN framework, and which allow to generically
deal with rotated input samples without the need for data augmentation.
We introduce three layers: a lifting layer which lifts a 2D (vector valued)
image to an SE(2)-image, i.e., 3D (vector valued) data whose domain is
SE(2); a group convolution layer from and to an SE(2)-image; and a
projection layer from an SE(2)-image to a 2D image. The lifting and group
convolution layers are covariant (the output roto-translates with the
input). The final projection layer, a maximum intensity projection over
rotations, makes the full CNN rotation invariant.
We show with three different problems in histopathology, retinal imaging, and
electron microscopy that with the proposed group CNNs, state-of-the-art
performance can be achieved, without the need for data augmentation by rotation
and with increased performance compared to standard CNNs that do rely on
augmentation. Comment: 8 pages, 2 figures, 1 table, accepted at MICCAI 201
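A rough discretised sketch of the lifting and projection layers described above: the lifting layer is approximated by correlating the image with rotated copies of a kernel, and the projection layer takes a maximum over orientations. The orientation count, kernel, and interpolation scheme are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy import ndimage as ndi

def lifting_layer(image, kernel, n_orientations=8):
    """Lift a 2D image to a discretised SE(2)-image: correlate the image
    with rotated copies of the kernel, one per sampled orientation."""
    angles = np.arange(n_orientations) * 360.0 / n_orientations
    stack = [ndi.correlate(image, ndi.rotate(kernel, a, reshape=False))
             for a in angles]
    return np.stack(stack)          # shape: (orientations, H, W)

def projection_layer(se2_image):
    """Maximum intensity projection over the orientation axis; this is
    what makes the full network's output rotation invariant."""
    return se2_image.max(axis=0)

rng = np.random.default_rng(1)
image = rng.random((32, 32))
kernel = rng.random((5, 5))
lifted = lifting_layer(image, kernel, n_orientations=8)
pooled = projection_layer(lifted)
```

Because a rotation of the input permutes (and roto-translates) the orientation channels rather than changing their content, the max over orientations is unaffected, which is the covariance-then-invariance structure the abstract describes.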
Assessment of algorithms for mitosis detection in breast cancer histopathology images
The proliferative activity of breast tumors, which is routinely estimated by counting of mitotic figures in hematoxylin and eosin stained histology sections, is considered to be one of the most important prognostic markers. However, mitosis counting is laborious, subjective and may suffer from low inter-observer agreement. With the wider acceptance of whole slide images in pathology labs, automatic image analysis has been proposed as a potential solution for these issues.
In this paper, the results from the Assessment of Mitosis Detection Algorithms 2013 (AMIDA13) challenge are described. The challenge was based on a data set consisting of 12 training and 11 testing subjects, with more than one thousand mitotic figures annotated by multiple observers. Short descriptions and results from the evaluation of eleven methods are presented. The top performing method has an error rate that is comparable to the inter-observer agreement among pathologists.
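Detection challenges of this kind are typically scored with an F1 measure over distance-matched detections. The sketch below uses a greedy one-to-one matching with an assumed distance threshold; it illustrates the general evaluation style, not necessarily the exact AMIDA13 protocol.

```python
import numpy as np

def match_detections(detections, ground_truth, max_dist=30):
    """Greedy one-to-one matching: a detection counts as a true positive
    if it lies within max_dist pixels of a not-yet-matched annotated
    mitosis; remaining detections are false positives, remaining
    annotations false negatives."""
    gt = [np.asarray(g) for g in ground_truth]
    tp = 0
    for d in map(np.asarray, detections):
        dists = [np.linalg.norm(d - g) for g in gt]
        if dists and min(dists) <= max_dist:
            gt.pop(int(np.argmin(dists)))
            tp += 1
    fp = len(detections) - tp
    fn = len(ground_truth) - tp
    f1 = 2 * tp / max(2 * tp + fp + fn, 1)
    return tp, fp, fn, f1

# One hit within the threshold, one spurious detection far away.
tp, fp, fn, f1 = match_detections([(0, 0), (100, 100)], [(3, 4)],
                                  max_dist=10)
```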